80 research outputs found

    Computing with Infinite and Infinitesimal Numbers in Matlab easily: The Grossone-Light Toolbox

    The introduction of the novel numeral system proposed in [1] (based on the notion of grossone) opens new frontiers in numerical computing, making it possible to easily perform computations involving infinite and infinitesimal numbers in a numerical way. This work aims at making this powerful numeral system easily usable and accessible to the large community of Matlab users. We have implemented (in pure Matlab code) the GrossoneLight Matlab Toolbox, a collection of classes, functions and examples that make grossone-based computing straightforward. The toolbox is called "light" because it introduces some limitations with respect to the more general numeral system discussed in [1]. In particular, only numbers made of integer powers of grossone can be represented, together with bounds on the minimum and maximum number of such powers. However, even in the presence of such limitations, the implemented numeral system is powerful enough to solve basic numerical linear algebra problems. Following the Matlab object-oriented abstraction paradigm, available in the latest Matlab releases, we have implemented two classes: the GrossNumber class and the GrossArray class. The first class represents a number made of integer grossone powers, where the coefficient used as multiplier for each power is a standard double-precision Matlab floating-point number. The GrossNumber class has been equipped with basic operations (addition, multiplication, etc.) by operator overloading. This allows GrossNumber objects to be used like any other Matlab scalar variable. The GrossArray class has been introduced to handle operations on arrays of GrossNumber objects more efficiently. The speedup can be significant, especially when the code is written in a vectorized fashion and a GPGPU (General Purpose Graphics Processing Unit) is available on the machine running the toolbox.
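The representation described above can be made concrete with a minimal sketch of a GrossNumber-like class (shown in Python for compactness; the class name, the bound on powers, and all other details are assumptions for illustration, not the toolbox's actual API):

```python
class GrossNumber:
    """A number of the form sum_i c_i * G^p_i, with integer grossone powers p_i
    and double-precision coefficients c_i. A hypothetical analogue of the
    toolbox's GrossNumber class; names and limits are assumptions."""
    MAX_POWERS = 8  # assumed bound on the number of representable powers

    def __init__(self, coeffs=None):
        # coeffs maps an integer grossone power to its float coefficient
        self.coeffs = {p: c for p, c in (coeffs or {}).items() if c != 0.0}
        if len(self.coeffs) > self.MAX_POWERS:
            raise ValueError("too many grossone powers for this 'light' format")

    def __add__(self, other):
        out = dict(self.coeffs)
        for p, c in other.coeffs.items():
            out[p] = out.get(p, 0.0) + c
        return GrossNumber(out)

    def __mul__(self, other):
        out = {}
        for p1, c1 in self.coeffs.items():
            for p2, c2 in other.coeffs.items():
                out[p1 + p2] = out.get(p1 + p2, 0.0) + c1 * c2
        return GrossNumber(out)

    def __repr__(self):
        terms = sorted(self.coeffs.items(), reverse=True)
        return " + ".join(f"{c}*G^{p}" for p, c in terms) or "0"

# (3 + 2*G^-1) * G  =  3*G + 2
x = GrossNumber({0: 3.0, -1: 2.0})
g = GrossNumber({1: 1.0})
print(x * g)  # 3.0*G^1 + 2.0*G^0
```

Operator overloading plays the same role here as in the Matlab toolbox: once `__add__` and `__mul__` are defined, a GrossNumber can be used like any other scalar.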

    The Big-M method with the numerical infinite M

    Linear programming is a very well known and widely applied field of optimization theory. One of its most famous and used algorithms is the so-called Simplex algorithm, independently proposed by Kantorovič and Dantzig between the end of the 1930s and the end of the 1940s. Even if extremely powerful, the Simplex algorithm suffers from one initialization issue: its starting point must be a feasible basic solution of the problem to solve. To overcome it, two approaches may be used: the two-phase method and the Big-M method, both presenting positive and negative aspects. In this work we propose a non-Archimedean and non-parametric variant of the Big-M method, able to overcome the drawbacks of its classical counterpart (mainly, the difficulty in setting the right value for the constant M). We realized this extension by means of the novel computational methodology proposed by Sergeyev, known as the Grossone Methodology. We have validated the new algorithm by testing it on three linear programming problems.
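The non-parametric idea can be illustrated with a toy analogy: if a cost a·M + b (with M treated as a genuinely infinite unit, grossone-style) is stored as the pair (a, b), the non-Archimedean order is exactly lexicographic comparison, and no finite value of M ever has to be chosen. This is a conceptual Python sketch, not the paper's actual algorithm:

```python
# A cost a*M + b, with M an actual infinite unit, stored as the pair (a, b);
# Python's tuple comparison is lexicographic, which is exactly the
# non-Archimedean order: any positive multiple of M outweighs any finite cost.
def big_m_cost(m_part, finite_part):
    return (m_part, finite_part)

artificial = big_m_cost(1, 0)     # artificial variable: cost M
regular = big_m_cost(0, 1e9)      # huge but finite cost
assert regular < artificial       # no finite value for M needs tuning

# toy pricing step: pick the most negative reduced cost, M-part first
reduced = [big_m_cost(0, -2.0), big_m_cost(-1, 3.0), big_m_cost(0, -5.0)]
print(min(reduced))  # (-1, 3.0): the M-part dominates the finite parts
```

With a parametric finite M, the same pricing step could give a different (wrong) answer whenever M is chosen too small; the pair representation removes that failure mode by construction.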

    Multiobjective evolutionary optimization of quadratic Takagi-Sugeno fuzzy rules for remote bathymetry estimation

    In this work we tackle the problem of bathymetry estimation using: i) a multispectral optical image of the region of interest, and ii) a set of in situ measurements. The idea is to learn the relation between the reflectances and the depth using a supervised learning approach. In particular, quadratic Takagi-Sugeno fuzzy rules are used to model this relation. The rule base is optimized by means of a multiobjective evolutionary algorithm. To the best of our knowledge, this work represents the first use of a quadratic Takagi-Sugeno fuzzy system optimized by a multiobjective evolutionary algorithm with bounded complexity, i.e., able to control the complexity of the consequent part of second-order fuzzy rules. This model has outstanding modeling power without inheriting the complexity drawback due to the use of quadratic functions (whose complexity scales quadratically with the number of inputs). This opens the way to the use of the proposed approach even for medium/high-dimensional problems, as in the case of hyperspectral images.
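As a sketch of how a quadratic (second-order) Takagi-Sugeno system produces its output, the following example evaluates rules whose consequents have the form x^T A x + b^T x + c, aggregated by normalized firing strengths. Gaussian antecedent membership functions and the product t-norm are assumptions for illustration; the paper's exact rule structure may differ:

```python
import math

def gauss_mf(x, mean, sigma):
    # Gaussian membership function (assumed antecedent shape)
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2)

def quad_consequent(x, A, b, c):
    # second-order consequent: f(x) = x^T A x + b^T x + c
    n = len(x)
    quad = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    lin = sum(b[i] * x[i] for i in range(n))
    return quad + lin + c

def ts_output(x, rules):
    # rules: list of (mf_params, A, b, c); product t-norm, weighted average
    num = den = 0.0
    for mfs, A, b, c in rules:
        w = 1.0
        for xi, (mean, sigma) in zip(x, mfs):
            w *= gauss_mf(xi, mean, sigma)
        num += w * quad_consequent(x, A, b, c)
        den += w
    return num / den

# two reflectance inputs -> depth, with two illustrative (made-up) rules
rules = [
    ([(0.2, 0.1), (0.3, 0.1)], [[1.0, 0.0], [0.0, 1.0]], [0.5, -0.5], 2.0),
    ([(0.7, 0.2), (0.6, 0.2)], [[0.0, 1.0], [1.0, 0.0]], [-1.0, 1.0], 5.0),
]
depth = ts_output([0.25, 0.35], rules)
```

The quadratic term is where the complexity scales with the square of the number of inputs, which is precisely what the bounded-complexity evolutionary search keeps under control.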

    The Algorithmic Numbers in Non-Archimedean Numerical Computing Environments

    There are many natural phenomena that can best be described by the use of infinitesimal and infinite numbers (see e.g. [1, 5, 13, 23]). However, until now, non-standard techniques have been applied only to theoretical models. In this paper we investigate the possibility of implementing such models in numerical simulations. First we define the field of Euclidean numbers, which is a particular field of hyperreal numbers. Then we introduce a set of families of Euclidean numbers, which we have called altogether algorithmic numbers, some of which are inspired by the IEEE 754 standard for floating-point numbers. In particular, we suggest three formats which are relevant from the hardware implementation point of view: the Polynomial Algorithmic Numbers, the Bounded Algorithmic Numbers and the Truncated Algorithmic Numbers. In the second part of the paper, we show a few applications of such numbers.
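A toy model of the truncation idea behind fixed-length algorithmic-number formats can be sketched by keeping only the first M coefficients of a power series in an infinitesimal unit (called eta here; the fixed length and the dropped-term policy are assumptions for illustration, not the paper's exact formats):

```python
M = 3  # assumed truncation length: keep coefficients of eta^0 .. eta^(M-1)

def tan_add(a, b):
    # coefficient-wise addition of two truncated series
    return [x + y for x, y in zip(a, b)]

def tan_mul(a, b):
    # convolution of coefficients, discarding terms of order eta^M and smaller
    out = [0.0] * M
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < M:
                out[i + j] += ai * bj
    return out

one_plus_eta = [1.0, 1.0, 0.0]     # represents 1 + eta
sq = tan_mul(one_plus_eta, one_plus_eta)
print(sq)  # [1.0, 2.0, 1.0]  ->  1 + 2*eta + eta^2
```

The fixed length M is what makes such a format attractive for hardware: every number occupies the same number of machine words, just as an IEEE 754 double always occupies 64 bits.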

    Exploiting Posit Arithmetic for Deep Neural Networks in Autonomous Driving Applications

    This paper discusses the introduction of an integrated Posit Processing Unit (PPU) as an alternative to the Floating-point Processing Unit (FPU) for Deep Neural Networks (DNNs) in automotive applications. Autonomous driving tasks increasingly depend on DNNs. For example, the detection of obstacles by means of object classification needs to be performed in real time without involving remote computing. To speed up the inference phase of DNNs, the CPUs on board the vehicle should be equipped with co-processors, such as GPUs, which embed specific optimizations for DNN tasks. In this work, we review an alternative arithmetic that could be used within the co-processor. We argue that a new representation for floating-point numbers, called Posit, is particularly advantageous, allowing for a better trade-off between computation accuracy and implementation complexity. We conclude that implementing a PPU within the co-processor is a promising way to speed up the DNN inference phase.
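For readers unfamiliar with the format, the following sketch decodes a posit&lt;nbits,es&gt; bit pattern into a float by extracting the sign, regime, exponent, and fraction fields. It is a simplified illustration of the standard decode (truncated exponent fields are handled naively), not the PPU design discussed in the paper:

```python
def decode_posit(p, nbits=8, es=0):
    """Decode a posit<nbits,es> bit pattern (given as an unsigned int) to a
    float. Simplified sketch of the standard decode."""
    mask = (1 << nbits) - 1
    p &= mask
    if p == 0:
        return 0.0
    if p == 1 << (nbits - 1):
        return float("nan")            # NaR ("not a real")
    sign = 1.0
    if p >> (nbits - 1):
        sign = -1.0
        p = (-p) & mask                # negative posits: two's complement
    s = format(p, f"0{nbits}b")[1:]    # payload bits after the sign bit
    first = s[0]
    run = len(s) - len(s.lstrip(first))            # regime: run of equal bits
    k = run - 1 if first == "1" else -run
    rest = s[run + 1:]                 # skip the regime terminator bit
    exp = int(rest[:es], 2) if es and rest[:es] else 0
    frac_bits = rest[es:]
    frac = int(frac_bits, 2) / (1 << len(frac_bits)) if frac_bits else 0.0
    useed = 1 << (1 << es)             # useed = 2^(2^es)
    return sign * (useed ** float(k)) * (2.0 ** exp) * (1.0 + frac)

print(decode_posit(0b01000000))  # 1.0
print(decode_posit(0b01010000))  # 1.5
print(decode_posit(0b01100000))  # 2.0
```

The variable-length regime is the source of the trade-off mentioned above: values near 1 get more fraction bits (more accuracy), while very large and very small values trade fraction bits for dynamic range.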

    A Semi-Supervised Learning-Aided Evolutionary Approach to Occupational Safety Improvement

    Worldwide, four people die every minute as a consequence of illnesses and accidents at work. This considerable number makes occupational safety an important research area aimed at obtaining ever-safer workplaces. This paper presents a semi-supervised learning-aided evolutionary approach to improve occupational safety by classifying workers depending on their own risk perception for the task assigned. More specifically, a semi-supervised learning phase is carried out to initialize a good population for a non-dominated sorting genetic algorithm (NSGA-II). Each chromosome of the population represents a pair of classifiers: one determines a worker's risk perception with respect to a task, the other determines the level of caution of the same worker for the same task. Learning from constraints reinforces the initial training performance. The best Pareto-optimal solution to the problem is selected by means of the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS). The proposed framework was tested on real-world data gathered through a purposely developed website. Results showed a good performance of the obtained classifiers, thus validating the effectiveness of the proposed approach in supporting the decision-maker in critical job-assignment problems, where risks are a serious threat to the workers' health.
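The Pareto-optimality notion at the heart of NSGA-II can be made concrete with a short sketch that extracts the first non-dominated front of a set of objective vectors (minimization; illustrative only, not the NSGA-II implementation used in the paper):

```python
def dominates(a, b):
    # a dominates b (minimization): no worse in every objective, strictly
    # better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_pareto_front(points):
    # keep the points that no other point dominates
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

pts = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(first_pareto_front(pts))  # [(1, 5), (2, 2), (4, 1)]
```

NSGA-II repeats this peeling to assign ranks (front 1, front 2, ...) and uses those ranks, plus a crowding measure, to drive selection; the final TOPSIS step then picks a single solution from the first front.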

    Solving the Environmental Economic Dispatch Problem with Prohibited Operating Zones in Microgrids using NSGA-II and TOPSIS

    This paper presents a multi-objective optimization framework for the environmental economic dispatch problem in microgrids. Besides classic constraints, prohibited operating zones and ramp-rate limits of the generators are also considered. Pareto-optimal solutions are generated through the NSGA-II algorithm with customized constraint handling. The optimal solution is selected with TOPSIS. Simulations carried out on a prototype microgrid showed the effectiveness of the proposed framework in handling scenarios with Pareto fronts having up to four discontinuities.
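The TOPSIS selection step can be sketched compactly: normalize the decision matrix, weight it, compute each alternative's distance from the ideal and anti-ideal points, and rank by relative closeness. This is an illustrative implementation; the weights and criteria below are placeholders, not the paper's settings:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns). benefit[j] is True if
    larger is better for criterion j. Returns relative-closeness scores in
    [0, 1]; higher is better."""
    ncrit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncrit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncrit)] for row in matrix]
    cols = list(zip(*v))
    ideal = [max(cols[j]) if benefit[j] else min(cols[j]) for j in range(ncrit)]
    anti = [min(cols[j]) if benefit[j] else max(cols[j]) for j in range(ncrit)]
    scores = []
    for row in v:
        d_pos = math.dist(row, ideal)   # distance to the ideal point
        d_neg = math.dist(row, anti)    # distance to the anti-ideal point
        scores.append(d_neg / (d_pos + d_neg))
    return scores

# three hypothetical dispatch plans scored on (cost, emissions): both are costs
scores = topsis([[100.0, 20.0], [120.0, 10.0], [110.0, 15.0]],
                weights=[0.5, 0.5], benefit=[False, False])
```

Picking the alternative with the highest score resolves the multi-objective trade-off into a single dispatch plan, which is how the Pareto front produced by NSGA-II is reduced to one solution.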

    Decision support for counter-piracy operations: analysis of correlations between attacks and METOC conditions using machine learning techniques

    The correlation between Meteorological and Oceanographic (METOC) data and sea piracy attacks in the Horn of Africa/Indian Ocean area is assessed and optimally exploited using a machine learning approach based on the concept of a one-class classifier. The trained algorithms and METOC forecast models are used as inputs to forecast the piracy risk related to environmental conditions over the region of interest. Performance evaluation strategies are provided to assess the goodness of the piracy risk maps used in daily counter-piracy operation support. The research, through a rigorous analytical/statistical approach, confirms the existence of a correlation between METOC conditions and sea piracy attacks. Moreover, the algorithm evaluation procedure shows that the machine learning approach to piracy risk prediction outperforms the classical threshold-based method of modeling piracy group operational limits.
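The one-class idea (training only on positive samples, i.e. conditions under which attacks occurred) can be illustrated with a toy classifier that flags new conditions by standardized distance. The feature names, threshold, and per-feature Gaussian assumption are all illustrative, not the paper's actual model:

```python
import statistics

class OneClassGaussian:
    """Toy one-class classifier: fit per-feature Gaussians on positive-only
    samples and flag new samples by their largest standardized deviation."""

    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, X):
        cols = list(zip(*X))
        self.mean = [statistics.fmean(c) for c in cols]
        self.std = [statistics.stdev(c) or 1.0 for c in cols]
        return self

    def score(self, x):
        # largest standardized deviation over features (e.g. wind, wave height)
        return max(abs(v - m) / s for v, m, s in zip(x, self.mean, self.std))

    def predict(self, x):
        # True = conditions similar to those of past attacks
        return self.score(x) <= self.threshold

# hypothetical features per past attack: (wind speed [m/s], wave height [m])
clf = OneClassGaussian().fit([(5, 1.0), (6, 1.2), (4, 0.8), (5, 1.1)])
print(clf.predict((5, 1.0)))   # True: resembles past attack conditions
print(clf.predict((20, 6.0)))  # False: storm conditions, outside the class
```

Unlike a fixed operational-limit threshold on each variable separately, the one-class formulation learns the joint envelope of attack-favourable conditions from data, which matches the paper's finding that it outperforms the classical threshold-based method.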

    Adaptive Sampling Using Fleets of Gliders in the Presence of Fixed Buoys: a Prototype Built Upon the MyOcean Service

    In the last decade the use of fleets of gliders has proven to be an effective way of sampling the ocean during long-duration missions (on the order of months). In a previous study [1], a method for adaptively sampling the ocean using fleets of gliders, based on a clustering algorithm, was introduced. The key ideas were: i) build a 2D mesh grid over the synoptic uncertainty of the ocean field to sample, with "knots" having density proportional to the level of uncertainty; ii) group this set of knots using a clustering algorithm, namely the Fuzzy C-Means (a fuzzy variant of the well-known K-Means algorithm). The resulting centroids are the next way-points for the gliders. However, that method assumed all-maneuverable assets. In this study we extend it by exploiting the existence of non-maneuverable assets, i.e. fixed buoys (a situation that frequently occurs in real scenarios), and by considering time-dependent uncertainty, i.e. aiming to reach the way-points at time t such that the uncertainty at future times is minimized. The first essential idea is to treat the positions of the fixed buoys as part of the centroids provided to the clustering algorithm: the remaining centroids to be computed are then taken as the next positions to which each glider is sent. By using the clustering algorithm described in [2], called "Partially Provided Centroids Fuzzy C-Means" (PPC-FCM), we have been able to exploit the presence of fixed buoys by sending the gliders to regions not already covered by the buoys/floats. This allows a better distribution (lower overlap) of the sensing assets with respect to the direct use of the standard Fuzzy C-Means, which is uninformed of the presence of the buoys. The second idea is to replace the synoptic uncertainty field with the field of mutual information between the way-points at time t and a selected future time.
We have built a prototype of this novel adaptive sampling scheme for mixed assets (maneuverable and non-maneuverable) that automatically retrieves ocean forecasts (currents, temperature, salinity, etc.) from MyOcean services. In addition, the prototype comes with a graphical user interface that facilitates the selection of the region of interest for data download. Once the data have been downloaded with little effort, the PPC-FCM algorithm is run to obtain the next glider way-points. The procedure is then repeated every time new forecasts become available. Our tool will be even more effective if future releases of MyOcean forecast products contain, in addition to the expected (mean) value of the field of interest obtained from forecasting models, a measure of the associated uncertainty, such as standard deviations. By including this uncertainty estimate, glider mission planners would have valuable information on where to send the assets in order to reduce the uncertainty as much as possible. REFERENCES [1] Cococcioni et al., "SONGs: Self Organizing Network of Gliders for Adaptive Sampling of the Ocean", Maritime Rapid Environmental Assessment Conference, October 18-22, Lerici, Italy, 2010. [2] Cococcioni, "Clustering in the presence of partially provided centroids: a fuzzy approach", Technical Report, Department of Information Engineering, Pisa, 2014.
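The PPC-FCM idea (buoy centroids participate in the fuzzy memberships but are never moved, while glider centroids are updated as usual) can be sketched as follows. This is an assumed reading of the algorithm in [2], not the reference implementation:

```python
def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def ppc_fcm(points, fixed, free_init, m=2.0, iters=50):
    """Fuzzy C-Means with partially provided centroids: `fixed` centroids
    (the buoys) enter the membership computation but are never updated; only
    the free (glider) centroids move."""
    free = [list(c) for c in free_init]
    dim = len(points[0])
    for _ in range(iters):
        cents = fixed + free
        U = []  # fuzzy memberships: U[i][k] of point i in centroid k
        for x in points:
            d = [max(dist2(x, c), 1e-12) for c in cents]
            inv = [dk ** (-1.0 / (m - 1)) for dk in d]
            s = sum(inv)
            U.append([v / s for v in inv])
        for j in range(len(free)):          # update ONLY the free centroids
            k = len(fixed) + j
            w = [U[i][k] ** m for i in range(len(points))]
            tot = sum(w)
            free[j] = [sum(w[i] * points[i][t] for i in range(len(points))) / tot
                       for t in range(dim)]
    return free

# toy scenario: one buoy sits on the left cluster of knots, so the single
# glider centroid is drawn to the uncovered right cluster
knots = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5), (5, 5.1)]
glider = ppc_fcm(knots, fixed=[[0.0, 0.0]], free_init=[[2.0, 2.0]])
```

Because the buoy already "absorbs" the membership of the left cluster, the free centroid migrates to the right cluster, which is exactly the lower-overlap behavior described above.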

    Lexicographic multi-objective linear programming using grossone methodology: Theory and algorithm

    Numerous problems arising in engineering applications have several objectives to be satisfied. An important class of problems of this kind is lexicographic multi-objective problems, where the first objective is incomparably more important than the second one which, in its turn, is incomparably more important than the third one, and so on. In this paper, Lexicographic Multi-Objective Linear Programming (LMOLP) problems are considered. To tackle them, traditional approaches either require the solution of a series of linear programming problems or apply a scalarization of the weighted multiple objectives into a single objective function. The latter approach requires finding a set of weights that guarantees the equivalence of the original problem and the single-objective one, and the search for correct weights can be very time consuming. In this work, a new approach for solving LMOLP problems is proposed, using a recently introduced computational methodology allowing one to work numerically with infinities and infinitesimals. It is shown that a smart application of infinitesimal weights allows one to construct a single-objective problem, avoiding the necessity of determining finite weights. The equivalence between the original multi-objective problem and the new single-objective one is proved. A simplex-based algorithm working with finite and infinitesimal numbers is proposed, implemented, and discussed. Results of some numerical experiments are provided.
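The infinitesimal-weight scalarization can be illustrated with tuples: a combined cost c1 + (1/G)·c2 (G the infinite grossone unit) stored as the pair (c1, c2) is ordered exactly lexicographically, so the secondary objective only ever breaks ties and no finite weights need to be determined. This is a conceptual Python sketch, not the paper's simplex algorithm:

```python
def combined_cost(objectives, x):
    # one coefficient tuple per objective; the scalarized cost is the tuple of
    # per-objective values, and tuple comparison is lexicographic, i.e. the
    # non-Archimedean order induced by infinitesimal weights
    return tuple(sum(c * xi for c, xi in zip(obj, x)) for obj in objectives)

objectives = [(1, 1),    # primary: minimize x1 + x2
              (0, -1)]   # secondary: minimize -x2 (i.e. maximize x2)
a, b = (2, 1), (1, 2)    # two candidate vertices with the same primary cost 3
print(combined_cost(objectives, a))  # (3, -1)
print(combined_cost(objectives, b))  # (3, -2)
# b wins: equal on the primary objective, better on the tie-breaker
assert combined_cost(objectives, b) < combined_cost(objectives, a)
```

With finite weights, a badly chosen weight for the secondary objective could overturn the primary ranking; the tuple (infinitesimal-weight) order makes that impossible by construction, which is the equivalence the paper proves for its grossone-based simplex.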